Interactive Label Cleaning with Example-based Explanations
Across data sets and models, cleaned counter-examples account for more than 30% of the total number of cleaned examples. FIM-based approaches outperform the LiSSA estimator. The full FIM itself is difficult to store and invert, motivating the approximations compared here. Figure 3 shows the results of the evaluation of Top Fisher, Practical Fisher, and nearest neighbor (NN). As reported in the main text, Practical Fisher lags behind Top Fisher in all cases.
Stefano Teso, Andrea Bontempelli, Fausto Giunchiglia, Andrea Passerini
We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed, in that they only relabel incoming examples that look "suspicious" to the model. As a consequence, mislabeled examples that elude (or don't undergo) this cleaning step end up tainting the training data and the model, with no further chance of being cleaned. We propose Cincer, a novel approach that cleans both new and past data by identifying pairs of mutually incompatible examples. Whenever it detects a suspicious example, Cincer identifies a counter-example in the training set that -- according to the model -- is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving this possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as explanations of the model's suspicion, and highly influential, so as to convey as much information as possible if relabeled. Cincer achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model's suspicions by cleaning the counter-examples helps acquire substantially better data and models, especially when paired with our FIM approximation.
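To make the FIM-based influence idea concrete, here is a minimal sketch of selecting a counter-example by a Fisher-preconditioned gradient inner product for a logistic-regression model. This is an illustration only, not the authors' implementation: every function name is hypothetical, the diagonal FIM approximation and the damping constant are simplifying assumptions, and the data are synthetic.

```python
import numpy as np

def per_example_gradient(w, x, y):
    """Gradient of the logistic loss w.r.t. weights w for one example."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def fisher_diagonal(w, X, Y):
    """Diagonal approximation of the Fisher information matrix.
    The full FIM is d x d and hard to store and invert; keeping only
    the diagonal reduces storage to d numbers and makes inversion
    an elementwise division. A small damping term keeps it invertible."""
    grads = np.stack([per_example_gradient(w, x, y) for x, y in zip(X, Y)])
    return np.mean(grads ** 2, axis=0) + 1e-8

def influence_scores(w, X_train, Y_train, x_sus, y_sus):
    """Score each training example by the FIM-preconditioned inner
    product between its loss gradient and the suspicious example's
    gradient; the highest-scoring example plays the role of the
    maximally incompatible counter-example."""
    fim_diag = fisher_diagonal(w, X_train, Y_train)
    g_sus = per_example_gradient(w, x_sus, y_sus)
    return np.array([
        per_example_gradient(w, x, y) @ (g_sus / fim_diag)
        for x, y in zip(X_train, Y_train)
    ])

# Toy usage on synthetic data with one injected label flip.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w = rng.normal(size=5)
Y = (X @ w > 0).astype(float)
Y[0] = 1 - Y[0]  # inject label noise
scores = influence_scores(w, X, Y, X[0], Y[0])
counter_idx = int(np.argmax(scores))
```

In an interactive loop, the example at `counter_idx` would be shown to the annotator alongside the suspicious example, and either or both labels would be corrected.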